Machine Learning as a microservice in a Docker container on a Kubernetes cluster — say what?

It is always fascinating to see the versatile ways in which machine learning can be used. At Outfittery, algorithms help the experts select the most suitable outfits for customers — quite literally. In this interview ML Conference speaker Jesper Richter-Reichhelm, CTO at Outfittery GmbH, explains how the company uses machine learning and which frameworks they use. He also tells us who makes better suggestions — human beings or machines.

JAXenter: At Outfittery, you don’t only rely on outfit experts, but also on machine learning. How exactly does this work? Did you first test, separately, what the algorithm and the human expert would suggest, or did you have the algorithm solve some issues and the expert others?

Jesper Richter-Reichhelm: We apply data science in two different fields: on the one hand, for the optimization of business processes, e.g. in the allocation of the purchasing budget. On the other hand, we use machine learning to support stylists in their decision-making processes, for example by recommending clothing sizes.

The latter is about providing the human experts with assistance derived from data analysis. This makes selecting the goods more efficient and, above all, improves the quality of the contents of the box we send to the customer.
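The interview does not go into how such a size recommendation works internally. As a purely illustrative baseline sketch (the function and the input data are hypothetical, not Outfittery's actual model), a naive recommendation could start from the sizes a customer has kept in past orders:

```python
from collections import Counter


def recommend_size(kept_sizes):
    """Recommend the size this customer most often kept in past orders.

    Purely illustrative baseline: a real model would combine many more
    signals (brand, cut, return reasons) per article category.
    """
    if not kept_sizes:
        return None  # no history yet, fall back to the stylist
    return Counter(kept_sizes).most_common(1)[0][0]


print(recommend_size(["M", "M", "L"]))  # prints the most frequently kept size: M
```

Even a trivial signal like this can assist the stylist; the point of the Smart Gateway described below is that such a baseline can later be replaced by a real model behind the same interface.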

JAXenter: What were the first steps from the technical side?

Jesper Richter-Reichhelm: We have been using machine learning in the company for several years. For almost two years now, however, we have taken the integration of machine learning algorithms into the IT platform to a new level by implementing what we call the “Smart Gateway”.

From a technical point of view, we can call a service that makes a decision: a so-called decision point. For an A/B test, for example, this can be the simple decision of which test group a customer belongs to. However, there can also be an arbitrarily complex algorithm behind it, which analyzes a customer’s data and produces an instruction for action, e.g. not to send an article with the wrong size, style or price to a customer. From the caller’s point of view, both cases are just a REST call from one microservice to another.
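As a hedged illustration of the simplest kind of decision point, the A/B group assignment mentioned above can be sketched as a deterministic hash bucket (the function name and the split are assumptions for illustration, not Outfittery's actual code):

```python
import hashlib


def assign_test_group(customer_id, test_name, groups=("A", "B")):
    """Deterministically assign a customer to a test group.

    Hashing the customer id together with the test name yields a stable,
    roughly uniform split: the same customer always gets the same group,
    and different tests split customers independently of each other.
    """
    digest = hashlib.sha256(f"{test_name}:{customer_id}".encode()).hexdigest()
    return groups[int(digest, 16) % len(groups)]
```

Behind the same REST interface, this trivial decision can later be swapped for an arbitrarily complex model without the caller noticing.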


In the beginning, this Smart Gateway was very simple, more of a prototype. Since then, the service has undergone several evolutionary steps and now forms the standard for integrating machine learning algorithms into the production system.

JAXenter: Which tools or frameworks do you use for the machine learning part of your system?


Jesper Richter-Reichhelm: We mainly use Python, scikit-learn, TensorFlow, Keras, Google Cloud and the Smart Gateway mentioned above.

JAXenter: You ended up with a machine learning microservice in a Docker container on a Kubernetes cluster. So you have pretty much every new technique in your system. How do you deal with the complexity involved? Or is it not that complex in the end?

Jesper Richter-Reichhelm: Packaging algorithms in Docker containers simplified the handover from our data scientists to our Data Delivery IT team, so that is what we did. In a second stage of expansion, Kubernetes then made it possible to orchestrate these containers more effectively.

This helps us with fault tolerance as well as with scaling. Our experience with it was so good that we are currently migrating the rest of our approximately 60 microservices to Docker and Kubernetes.

The use of Docker and Kubernetes simplifies our everyday life, even though we had to learn something new. But I see that as an advantage rather than a disadvantage. Standing still is boring; we are constantly trying to improve ourselves.

For example, we implemented the Smart Gateway and some related microservices in Kotlin. Here, too, we are very happy with the choice of technology, and the additional complexity is quite manageable.


JAXenter: How do you validate the results of the algorithms? Is there some kind of test stage before you go live?

Jesper Richter-Reichhelm: It is common practice for us to start with a proof of concept, whose performance we validate with data that was not used to train the algorithm. If the algorithm is successful, we deploy it in so-called shadow mode.

Live data is fed to the algorithm, but its result is not yet applied in production. This lets us check once more, live, whether everything works as intended. We then roll out the algorithm as part of an A/B test and can again measure its influence on the relevant KPIs.
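A minimal sketch of this shadow mode, assuming a hypothetical decision function per algorithm: the production result is always returned, while the candidate's result is only logged for later comparison.

```python
import logging

logger = logging.getLogger("shadow")


def decide(customer, production_algo, shadow_algo=None):
    """Return the production decision; run the shadow candidate on the
    same live input, but only log its result for offline comparison."""
    result = production_algo(customer)
    if shadow_algo is not None:
        try:
            shadow_result = shadow_algo(customer)
            logger.info("shadow=%r production=%r", shadow_result, result)
        except Exception:
            # A failing shadow candidate must never affect production.
            logger.exception("shadow algorithm failed")
    return result
```

The comparison of logged shadow decisions against production outcomes is what justifies promoting the candidate to the A/B test stage.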

JAXenter: Last but not least: who makes better suggestions, the (human) expert or the algorithm?

Jesper Richter-Reichhelm: We believe that the combination of human intelligence (our stylists) and artificial intelligence is the right formula to provide the customer with a relevant, personal fashion selection.

For us, the stylist complements artificial intelligence: their personal questions cover a part that AI can’t do yet. Fashion is ultimately a very emotional and personal topic: What do I wear? What does it say about me? Who am I? There are many irrational components involved, which is why we believe that humans cannot be replaced.

Thank you!


Melanie Feldmann studied Technology Journalism at the Bonn-Rhein-Sieg University of Applied Sciences and has worked at S&S Media since October 2015.
